From: fpga journal update [news@fpgajournal.com]
Sent: April 5, 2005, 11:58 PM
To: Michael Dolinsky
Subject: FPGA Journal Update Vol VII No 1


a techfocus media publication :: April 5, 2005 :: volume VII, no. 01


FROM THE EDITOR

This week we delve into the exotic and mysterious world of supercomputing with a look at the new Cray XD1, where FPGAs as reconfigurable compute engines are gaining a foothold as key enablers in accelerating performance-critical algorithms. With a little time investment to move intensive data crunching into FPGA hardware, we can gain several orders of magnitude of compute performance over conventional processor-based approaches.

Our new contributed article this week is from Lauro Rizzatti at Emulation and Verification Engineering (EVE) on leveraging FPGA-based verification systems to catch bugs in complex system-on-chip (SoC) designs. With FPGAs accelerating the verification process, you can gain a significant advantage in debugging embedded software and hardware-software interfaces as well as dramatically increasing the number of vectors you can pump through your hardware design.


Thanks for reading! If there's anything we can do to make our publications more useful to you, please let us know at: comments@fpgajournal.com

Kevin Morris – Editor
FPGA and Programmable Logic Journal


LATEST NEWS

April 5, 2005

IPWireless Gains Performance/Cost Reduction for Its 3G-UMTS Base Station Using LSI Logic RapidChip(R) Platform ASIC

Altera Completes Rollout of MAX II CPLD Family

RT-LAB Pentium-M-Based Real-Time Simulator for Embedded Systems Development

HARDI Electronics Expands Sales Team; Adds Three Independent Representatives in Western United States

April 4, 2005

Lattice, Mouser Electronics Sign Distribution Agreement 

Magma's Cobra Products Now Available; 18-Month Development Delivers New Products, Expanded Capabilities for 90-nm and 65-nm Design

Xilinx Uses the Precision Synthesis Tool from Mentor Graphics in its Advanced FPGA Design Training Courses

ProDesign Joins Synopsys in-Sync Program to Enable Unified ASIC Prototyping and Verification; Common ASIC Flow Optimized for Customer Productivity

April 2, 2005

Portable RT-LAB Pentium-M Based Hybrid-Vehicle Electrical Real-Time Simulator

April 1, 2005

Customer Bulletin: Xilinx Ships Virtex-4 LX200: World's Highest Density FPGA

March 31, 2005

Mercury Computer Systems Enables High-Speed Data Movement for FPGA-Based Applications with FDK 2.0

March 30, 2005

Demand for Semiconductor Technologies in Medical Imaging Modalities on the Rise

Actel and Prover Technology Announce Equivalence Checking Support for Actel Design Flows

Research and Markets: FCSP Market Set to Grow from $92.8 million in 2004 to $476.9 million by 2009

Visit Techfocus Media

CURRENT FEATURE ARTICLES

Cray Goes FPGA
Algorithm Acceleration in the New XD1

The Real Fear Factor
by Lauro Rizzatti, Emulation and Verification Engineering (EVE)

Clock Watching
Unraveling Complex Clocking

Free Tool Friday
How Good are FPGA Vendor Tools?

Deeply Embedded
ESC 2005 - the FPGA View

Two Bucks
Xilinx Introduces Spartan-3E

Plug and Play Design Methodologies for FPGA-based Signal Processing
by Narinder Lall, Xilinx, Inc. and Eric Cigan, AccelChip, Inc.

Lattice Launches XP
Non-Volatility at the Forefront of FPGA

Cray Goes FPGA
Algorithm Acceleration in the New XD1

When I was in college, I knew the future of supercomputing. The supercomputers of the 21st century would be massive, gleaming masterpieces of technology. They would not be installed into buildings; rather, buildings would be designed and constructed around them, particularly to house the cooling systems. The design specifics were fuzzy, but I was reasonably sure that very low temperatures would be involved for either superconducting connectivity, SQUIDs, or Josephson junction-esque switching. Silicon would certainly have been long abandoned in favor of gallium arsenide or some even more exotic semiconductor material. I believed that Cray, Inc., as the preeminent developer of supercomputers, would be able to leverage these techniques to gain perhaps a full order of magnitude of computing performance over the machines of the day.

A few years later, when Xilinx rolled out their first FPGAs, I could see the future of that technology as well. FPGAs would act as a sort of system-level silicon super-glue, sitting at the periphery of the circuit board and stitching together incompatible protocols. With the simple addition of an FPGA, anything could be made to connect to anything else, and programmability ensured that we could adapt on the fly and change our design to leverage any new, improved component without having to abandon the rest of our legacy design.

As I gazed into my crystal ball (looking way out past the distorted reflection of my feathered hair, Lacoste polo shirt, and wayfarer sunglasses), I could not envision any connection between these two seemingly unrelated technology tracks. Supercomputers would be designed and built from the ground up, using carefully matched and optimized homogeneous components, while FPGAs would be the duct tape of electronics design, helping to hold together aging multi-generational systems for a few more years of life in the field before they were retired altogether. In my crystal ball, the two paths were obviously diverging.

I was right about the Cray part. [more]


The Real Fear Factor
by Lauro Rizzatti, Emulation and Verification Engineering (EVE)

Dealing with mass quantities of unsavory bugs is commonplace in both reality TV and modern-day chip design. And, the "fear factor" for both is the same, too: failure to do so in the time specified can put an end to a contestant’s 15 minutes of fame or cost a designer his or her job.

In the era of the system on chip (SoC), dealing with tens of millions of gates is a given for hardware design teams, but the time-to-market game is getting more complex and fraught with danger due to the explosion in embedded software. According to several design teams in the midst of budgeting for their next design, the software portion of an SoC is growing at an annual rate of 140 percent, while the hardware is expanding just 40 percent year over year.

If this is today's reality, how can design teams keep up? And are there more practical ways to debug and verify the embedded parts of the design before the hardware is done?

In this new reality, there are three alternatives. The first is a software-based development environment that models hardware in C at a high level of abstraction, above the register transfer level (RTL). While it's fast, it suffers from two drawbacks. First, creating the model is time consuming, and access to a complete library of ready-made models is beyond reach. Second, it is not suitable for verifying the integration between the embedded software and the underlying SoC hardware, because of the difficulty of accurately modeling hardware at this high level of abstraction. [more]


You're receiving this newsletter because you subscribed at our web site www.fpgajournal.com.
If someone forwarded this newsletter to you and you'd like to receive your own free subscription, go to: www.fpgajournal.com/update.
If at any time, you would like to unsubscribe, send e-mail to unsubscribe@fpgajournal.com. (But we hope you don't.)
If you have any questions or comments, send them to comments@fpgajournal.com.

All material copyright © 2003-2005 techfocus media, inc. All rights reserved.
Privacy Statement